53 research outputs found

    "Sticky Hands": learning and generalization for cooperative physical interactions with a humanoid robot

    "Sticky Hands" is a physical game for two people involving gentle contact with the hands. The aim is to develop relaxed and elegant motion together, achieve physical sensitivity-improving reactions, and experience an interaction at an intimate yet comfortable level for spiritual development and physical relaxation. We developed a control system for a humanoid robot allowing it to play Sticky Hands with a human partner. We present a real implementation including a physical system, robot control, and a motion learning algorithm based on a generalizable intelligent system capable itself of generalizing observed trajectories' translation, orientation, scale and velocity to new data, operating with scalable speed and storage efficiency bounds, and coping with contact trajectories that evolve over time. Our robot control is capable of physical cooperation in a force domain, using minimal sensor input. We analyze robot-human interaction and relate characteristics of our motion learning algorithm with recorded motion profiles. We discuss our results in the context of realistic motion generation and present a theoretical discussion of stylistic and affective motion generation based on, and motivating cross-disciplinary research in computer graphics, human motion production and motion perception

    Cerebral correlates and statistical criteria of cross-modal face and voice integration

    Perception of faces and voices plays a prominent role in human social interaction, making multisensory integration of cross-modal speech a topic of great interest in cognitive neuroscience. How to define potential sites of multisensory integration using functional magnetic resonance imaging (fMRI) is currently under debate, with three statistical criteria frequently used (the super-additive, max and mean criteria). In the present fMRI study, 20 participants were scanned in a block design under three stimulus conditions: dynamic unimodal face, unimodal voice and bimodal face–voice. Using this single dataset, we examine all three statistical criteria in an attempt to define loci of face–voice integration. While the super-additive and mean criteria essentially revealed regions in which one of the unimodal responses was a deactivation, the max criterion appeared stringent and only highlighted the left hippocampus as a potential site of face–voice integration. Psychophysiological interaction analysis showed that connectivity between occipital and temporal cortices increased during bimodal compared to unimodal conditions. We conclude that, when investigating multisensory integration with fMRI, these criteria should be used in conjunction with manipulation of stimulus signal-to-noise ratio and/or cross-modal congruency.
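    For readers unfamiliar with the three criteria named above, the sketch below spells out how the super-additive, max and mean tests compare the bimodal response against the two unimodal responses. It is an illustration of the standard definitions, not the study's analysis code, and the function and variable names are ours.

        import numpy as np

        # F, V and FV are response estimates (e.g., GLM betas) for the unimodal face,
        # unimodal voice and bimodal face-voice conditions; arrays may hold one value
        # per voxel, in which case the comparisons are evaluated element-wise.
        def integration_criteria(F, V, FV):
            return {
                "super_additive": FV > F + V,          # bimodal exceeds the sum of unimodal responses
                "max": FV > np.maximum(F, V),          # bimodal exceeds the larger unimodal response
                "mean": FV > (F + V) / 2.0,            # bimodal exceeds the mean unimodal response
            }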

    Dance and emotion in posterior parietal cortex: a low-frequency rTMS study

    Background: The neural bases of emotion are most often studied using short non-natural stimuli and assessed using correlational methods. Here we use a brain perturbation approach to make causal inferences between brain activity and the emotional reaction to a long segment of dance. Objective/Hypothesis: We aimed to apply offline rTMS over the brain regions involved in subjective emotional ratings to explore whether this could change the appreciation of a dance performance. Methods: We first used functional magnetic resonance imaging (fMRI) to identify regions correlated with the fluctuating emotional rating during a 4-minute dance performance, looking at both positive and negative correlations. Identified regions were further characterized using meta-data interrogation. Low-frequency repetitive TMS was applied over the most important node in a different group of participants before they rated the same dance performance as in the fMRI session. Results: fMRI revealed a negative correlation between subjective emotional judgment and activity in the right posterior parietal cortex. This region is commonly involved in cognitive rather than emotional tasks. Parietal rTMS had no effect on the general affective response, but it significantly (p<0.05 using exact t-statistics) enhanced the rating of the moment eliciting the highest positive judgments. Conclusion: These results establish a direct link between posterior parietal cortex activity and the emotional reaction to dance. They can be interpreted in the framework of competition between resources allocated to emotion and resources allocated to cognitive functions. They highlight the potential use of brain stimulation in neuro-æsthetic investigations.

    Motor simulation without motor expertise: enhanced corticospinal excitability in visually experienced dance spectators

    The human “mirror-system” is suggested to play a crucial role in action observation and execution, and is characterized by activity in the premotor and parietal cortices during the passive observation of movements. The previous motor experience of the observer has been shown to enhance activity in this network. Yet visual experience could also have a determining influence when watching more complex actions, as in dance performances. Here we tested the impact of visual experience on motor simulation when watching dance, by measuring changes in corticospinal excitability. We also tested the effects of empathic abilities. To fully match the participants' long-term visual experience with the present experimental setting, we used three live solo performances: ballet, Indian dance, and non-dance. Participants were either frequent dance spectators of ballet or Indian dance, or “novices” who never watched dance. None of the spectators had been physically trained in these dance styles. Transcranial magnetic stimulation was used to measure corticospinal excitability by means of motor-evoked potentials (MEPs) in both the hand and the arm, because the hand is specifically used in Indian dance and the arm is frequently engaged in ballet movements. We observed that frequent ballet spectators showed larger MEP amplitudes in the arm muscles when watching ballet than when watching the other performances. We also found that the higher Indian dance spectators scored on the fantasy subscale of the Interpersonal Reactivity Index, the larger their arm MEPs were when watching Indian dance. Our results show that, even without physical training, corticospinal excitability can be enhanced as a function of either visual experience or the tendency to imaginatively transpose oneself into fictional characters. We suggest that spectators covertly simulate the movements for which they have acquired visual experience, and that empathic abilities heighten motor resonance during dance observation.

    Blur resolved OCT: full-range interferometric synthetic aperture microscopy through dispersion encoding

    We present a computational method for full-range interferometric synthetic aperture microscopy (ISAM) under dispersion encoding. With this, one can effectively double the depth range of optical coherence tomography (OCT), whilst dramatically enhancing the spatial resolution away from the focal plane. To this end, we propose a model-based iterative reconstruction (MBIR) method, in which ISAM is directly incorporated into an optimization approach, and we find that sparsity-promoting regularization effectively recovers the full-range signal. Within this work, we adopt an optimal nonuniform fast Fourier transform (NUFFT) implementation of ISAM, which is both fast and numerically stable throughout iterations. We validate our method on several complex samples, scanned with a commercial SD-OCT system with no hardware modification. With this, we both demonstrate full-range ISAM imaging and significantly outperform combinations of existing methods.
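    As a rough, generic illustration of the reconstruction idea described above (not the authors' ISAM/NUFFT implementation; the forward model and its adjoint are left as abstract callables), the sketch below runs a plain ISTA loop that minimizes a least-squares data term plus a sparsity-promoting L1 penalty.

        import numpy as np

        # Generic ISTA sketch for min_x 0.5*||A(x) - y||^2 + lam*||x||_1, with the
        # forward model A and its adjoint passed in as callables. Real-valued x is
        # assumed for simplicity; a full ISAM model would be complex-valued. For
        # convergence, step should be no larger than 1 / ||A||^2.
        def ista(forward, adjoint, y, lam, step, n_iter=100):
            x = adjoint(y)                                  # simple back-projected initial estimate
            for _ in range(n_iter):
                r = forward(x) - y                          # data residual
                x = x - step * adjoint(r)                   # gradient step on the data term
                x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)  # soft-threshold (L1 prox)
            return x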

    Perceiving animacy and arousal in transformed displays of human interaction

    When viewing a moving abstract stimulus, people tend to attribute social meaning and purpose to the movement. The classic work of Heider and Simmel [1] investigated how observers would describe the movement of simple geometric shapes (a circle, triangles, and a square) around a screen. A high proportion of participants reported seeing some form of purposeful interaction between the three abstract objects and described this interaction as a social encounter. Various papers have subsequently found similar results [2,3] and gone on to show that, as Heider and Simmel suggested, the phenomenon was due more to the relationship of the objects in space and time than to any particular object characteristic. The research of Tremoulet and Feldman [4] has shown that the percept of animacy may be elicited by a solitary moving object. They asked observers to rate the movement of a single dot or rectangle according to whether it was under the influence of an external force or in control of its own motion. At mid-trajectory the shape would change speed, direction, or both. They found that shapes that either changed direction by more than 25 degrees from the original trajectory, or changed speed, were judged to be "more alive" than others. Further discussion and evidence of animacy with one or two small dots can be found in Gelman, Durgin and Kaufman [5]. Our aim was to study this phenomenon further using a different method of stimulus production. Previous methods for producing displays of animate objects have relied either on handcrafted stimuli or on parametric variations of simple motion patterns. We aim to work towards a new automatic approach by taking actual human movements, transforming them into basic shapes, and exploring which motion properties need to be preserved to obtain animacy. Though the phenomenon of animacy has been demonstrated for many years, using a variety of displays, very few specific criteria have been established for the essential characteristics of the displays. Part of this research is to try to establish which movements result in percepts of animacy and, in turn, to further the understanding of the essential characteristics of human movement and social interaction. In this paper we discuss two experiments in which we examine how different transformations of an original video of a dance influence the perception of animacy. We also examine reports of arousal (Experiment 1) and emotional engagement (Experiment 2).
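    As a small, hedged illustration of the kind of motion property discussed above (our own sketch, not the stimulus-generation code used in these experiments), the Python snippet below extracts frame-to-frame speed and heading changes from a 2-D point trajectory, the quantities in which Tremoulet and Feldman's criterion is stated.

        import numpy as np

        # Per-frame speed and heading changes for a 2-D point trajectory xy of
        # shape (N, 2), sampled at interval dt. Tremoulet and Feldman report that
        # heading changes above roughly 25 degrees, or speed changes, raise
        # "alive" ratings.
        def heading_and_speed_changes(xy, dt=1.0):
            v = np.diff(xy, axis=0) / dt                        # frame-to-frame velocity
            speed = np.linalg.norm(v, axis=1)
            heading = np.degrees(np.arctan2(v[:, 1], v[:, 0]))
            dheading = np.abs(np.diff(heading))
            dheading = np.minimum(dheading, 360.0 - dheading)   # wrap to [0, 180] degrees
            dspeed = np.diff(speed)
            return dheading, dspeed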

    Using humanoid robots to study human behavior

    Our understanding of human behavior advances as our humanoid robotics work progresses, and vice versa. This team's work focuses on trajectory formation and planning, learning from demonstration, oculomotor control and interactive behaviors. They are programming robotic behavior based on how we humans “program” behavior in, or train, each other.

    The features people use to recognize human movement style

    No full text

    The perception of motion and structure in structure-from-motion: comparisons of affine and Euclidean formulations

    No full text
    I investigated the discrimination of rigid from nonrigid structure and the perception of affine stretches along the line of sight [Norman & Todd (1993), Perception and Psychophysics, 53, 279-291]. Investigations of the discrimination of rigid from nonrigid structure showed that performance improved as the number of views and the amount of simulated three-dimensional nonrigidity increased. Investigations of rotations about the vertical axis that include affine stretches along the line of sight compared Euclidean interpretations of affine-stretching stimuli with human perception. These Euclidean interpretations were obtained from a simple algorithm that recovered structure and motion from this limited class of stimuli under the assumption that distances to the axis of rotation did not change. The algorithm predicted that stretches along the line of sight would be perceived as nearly rigid and as having variable angular velocity. These predictions were supported by subjects' reports of occurrences of nonrigidity and of minima of angular velocity. The Euclidean algorithm also provided measures of nonrigidity and motion coherence, and the experimental results were consistent with a prediction of when the perception of nonrigidity would be independent of the perception of coherence. The results are discussed in relation to the advantages and shortcomings of both the affine and Euclidean approaches to structure-from-motion.
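    One way to make the affine-stretching stimuli concrete (our notation and a simplification, not the thesis's own formulation): under orthographic projection, a point at $(X, Y, Z)$ rotating about the vertical axis through an angle $\theta(t)$ projects to

        \[ x(t) = X\cos\theta(t) + Z\sin\theta(t), \qquad y(t) = Y. \]

    An affine stretch by a factor $k$ along the line of sight replaces $Z$ by $kZ$, so that $x_k(t) = X\cos\theta(t) + kZ\sin\theta(t) = \rho\cos\bigl(\theta(t) - \alpha\bigr)$ with $\rho = \sqrt{X^2 + k^2 Z^2}$ and $\tan\alpha = kZ/X$. Each image point therefore still traces a constant-amplitude sinusoid, and when $k$ changes over time an interpretation that holds the distance to the rotation axis fixed must absorb the change into the recovered angular position, which is one way to read the prediction above of nearly rigid percepts with variable angular velocity.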

    Virtual surfaces and the influence of visual cues on grasp

    No full text
    This research compared grasps to real surfaces with grasps to virtual surfaces, and used virtual surfaces to examine the role of cues to surface shape in grasp. The first experiment investigated the kinematics of overhand grasps to real and virtual objects. The results showed that, compared to grasps to real surfaces, grasps to virtual objects differed in the deceleration phase and were more variable in their endpoint position. The second experiment examined how, for several real and virtual surface conditions, the decision to use either an overhand or underhand grasp switched as a function of object orientation, and compared these results with data obtained for visual matching of orientation to the same stimuli. The decision to switch between types of grasp showed no difference between actual grasp movements and verbal reports of grasp choice that were not accompanied by arm movement. It was also found that variability in the decision to switch between overhand and underhand grasps was correlated with variability in visual matching of orientation. The third experiment used virtual surfaces to examine how the removal of visual cues to shape affected the decision to switch from overhand to underhand grasp. Results showed that the orientation at which the decision switched depended on the visual information available. In summary, it was shown that grasps were affected by the information available in virtual surfaces and that, compared with real surfaces, grasps to virtual surfaces appear identical in the choice of grasp type but differ in the kinematics of grasp execution.